Learning Heteroscedastic Models by Convex Programming under Group Sparsity

Abstract

and $\nu^\circ_t R_{t,:}\alpha^\circ = 0$ for every $t$. It should be emphasized that relation (18) holds true only in the case where the solution satisfies $\min_k |X_{:,G_k}\phi^\circ_{G_k}|_2 \neq 0$; otherwise one has to replace it by the condition stating that the null vector belongs to the subdifferential. Since this does not alter the proof, we prefer to proceed as if everything were differentiable. On the one hand, $(\phi^\circ, \alpha^\circ)$ satisfies (18) if and only if
$$\Pi_{G_k}\big(\operatorname{diag}(Y)\,R\alpha^\circ - X\phi^\circ\big) = \lambda_k\,\frac{X_{:,G_k}\phi^\circ_{G_k}}{|X_{:,G_k}\phi^\circ_{G_k}|_2},$$
with $\Pi_{G_k} = X_{:,G_k}\,(X_{:,G_k}^\top X_{:,G_k})^{+}\,X_{:,G_k}^\top$ being the orthogonal projector onto the range of $X_{:,G_k}$ in $\mathbb{R}^T$. Taking the norm of both sides in the last equation, we get $\big|\Pi_{G_k}\big(\operatorname{diag}(Y)\,R\alpha^\circ - X\phi^\circ\big)\big|_2 \leq \lambda_k$. This tells us that $(\phi^\circ, \alpha^\circ)$ satisfies (7). On the other hand, since the minimum of (6) is finite, one easily checks that $R_{t,:}\alpha^\circ \neq 0$ and, therefore, $\nu^\circ = 0$. Replacing $\nu^\circ$ by zero in (19) and setting $v^\circ_t = 1/R_{t,:}\alpha^\circ$, we get that $(\phi^\circ, \alpha^\circ, v^\circ)$ satisfies (8) and (9). This proves that the set of feasible solutions of the optimization problem defining the ScHeDs is not empty.
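As a quick numerical sanity check of condition (7), one can form each group projector from the pseudoinverse and test whether the projected residual stays below its threshold. The sketch below is illustrative only: the names X, Y, R, phi, alpha, groups, and lambdas are assumptions standing in for the quantities above, not code from the paper.

```python
# Illustrative sketch (assumed setup, not the authors' code): check the
# feasibility condition |Pi_{G_k}(diag(Y) R alpha - X phi)|_2 <= lambda_k.
import numpy as np

def group_projector(X, G):
    """Orthogonal projector onto the range of X[:, G], via the pseudoinverse."""
    Xg = X[:, G]
    return Xg @ np.linalg.pinv(Xg.T @ Xg) @ Xg.T

def satisfies_condition_7(X, Y, R, phi, alpha, groups, lambdas, tol=1e-10):
    """Return True if every group's projected residual is within its bound."""
    residual = np.diag(Y) @ (R @ alpha) - X @ phi
    return all(
        np.linalg.norm(group_projector(X, G) @ residual) <= lam + tol
        for G, lam in zip(groups, lambdas)
    )
```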


Similar resources

Learning Heteroscedastic Models by Convex Programming under Group Sparsity

Popular sparse estimation methods based on $\ell_1$-relaxation, such as the Lasso and the Dantzig selector, require the knowledge of the variance of the noise in order to properly tune the regularization parameter. This constitutes a major obstacle in applying these methods in several frameworks, such as time series, random fields, and inverse problems, for which the noise is rarely homoscedastic and its l...


Convex Optimization with Mixed Sparsity-inducing Norm

Sparsity-inducing norms have been a powerful tool for learning robust models with limited data in high dimensional space. By imposing such norms as constraints or regularizers in an optimization setting, one can bias the model towards learning sparse solutions, which in many cases have been proven to be more statistically efficient [Don06]. Typical sparsity-inducing norms include the $\ell_1$ norm [Tib96] ...
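To make the mechanism concrete, here is a minimal sketch, assuming plain numpy vectors, of the proximal operators behind two such norms: coordinate-wise soft-thresholding for the $\ell_1$ norm and block soft-thresholding for a group $\ell_2$ norm. The function names are hypothetical and not taken from the cited works.

```python
# Illustrative sketch: proximal operators of two sparsity-inducing norms.
import numpy as np

def prox_l1(v, lam):
    """prox of lam * |v|_1: soft-thresholding, zeroes small coordinates."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_group_l2(v, groups, lam):
    """prox of lam * sum_k |v_{G_k}|_2: zeroes entire groups at once."""
    out = v.copy()
    for G in groups:
        nrm = np.linalg.norm(v[G])
        out[G] = 0.0 if nrm <= lam else (1.0 - lam / nrm) * v[G]
    return out
```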


Analysis of Fast Alternating Minimization for Structured Dictionary Learning

Methods exploiting sparsity have been popular in imaging and signal processing applications including compression, denoising, and imaging inverse problems. Data-driven approaches such as dictionary learning enable one to discover complex image features from datasets and provide promising performance over analytical models. Alternating minimization algorithms have been particularly popular in dictionary learning...
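To illustrate the alternating structure, the toy sketch below alternates a crude hard-thresholding sparse-coding step with a least-squares dictionary update. It is a simplification under assumed names (Y for training signals, D for the dictionary, Z for the sparse codes) and is not the algorithm analyzed in the paper.

```python
# Illustrative sketch of alternating minimization for dictionary learning.
import numpy as np

def learn_dictionary(Y, n_atoms, sparsity, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
    for _ in range(n_iters):
        # Sparse-coding step: keep the largest coefficients per signal.
        Z = D.T @ Y
        small = np.argsort(np.abs(Z), axis=0)[:-sparsity, :]
        np.put_along_axis(Z, small, 0.0, axis=0)
        # Dictionary-update step: least squares, then renormalize atoms.
        D = Y @ np.linalg.pinv(Z)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, Z
```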


Structured Sparsity: Discrete and Convex approaches

Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: While the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower dimensional s...
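As one concrete instance of such a recovery procedure, the sketch below implements iterative hard thresholding, a standard baseline for recovering an s-sparse signal from compressive measurements y = A x. The setup and the function name iht are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch: iterative hard thresholding for y = A @ x, x s-sparse.
import numpy as np

def iht(A, y, s, n_iters=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # safe step from spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x + step * (A.T @ (y - A @ x))      # gradient step on |y - Ax|^2
        x[np.argsort(np.abs(x))[:-s]] = 0.0     # keep only s largest entries
    return x
```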

متن کامل

A Fast Algorithm for Separated Sparsity via Perturbed Lagrangians

Sparsity-based methods are widely used in machine learning, statistics, and signal processing. There is now a rich class of structured sparsity approaches that expand the modeling power of the sparsity paradigm and incorporate constraints such as group sparsity, graph sparsity, or hierarchical sparsity. While these sparsity models offer improved sample complexity and better interpretability...




Publication year: 2013